Patent abstract:
The invention relates to a system for determining a distance to an object, comprising: a solid-state light source for projecting a pattern of spots of laser light onto the object in a series of pulses; a detector comprising a plurality of pixels, for detecting the light of the pattern of spots as reflected by the object in synchronism with said pulses; and processing means for calculating the distance to the object as a function of the readout values generated by the pixels. The pixels are configured to generate a readout value by accumulating, for each pulse of the sequence, a first amount of electrical charge representative of a first amount of light reflected by the object during a first time window, and a second amount of electrical charge representative of a second amount of light reflected by the object during a second time window, the second time window occurring after the first time window.
Publication number: BE1023788B1
Application number: E2016/5799
Filing date: 2016-10-24
Publication date: 2017-07-26
Inventors: Dirk Van Dyck; Johan Van Den Bossche
Applicant: Xenomatix NV
Primary IPC class:
Patent description:

System and method for determining the distance to an object

Field of the invention
The present invention relates to the field of systems for determining the distance to an object, in particular to time-of-flight (ToF) based systems for characterizing a scene or a part thereof.
Background
In the field of remote sensing technology, especially for taking high-resolution images of the environment for use in numerous control and navigation applications such as, but not limited to, the automotive sector, the industrial sector, gaming applications and mapping applications, it is known to use detection systems based on a round-trip time measurement to determine the distance of objects from a sensor.
ToF-based techniques include the use of radio frequency modulated sources, image sensors with a time-gated integration window, and direct time-of-flight measurements. For the use of RF modulated sources and time-gated image sensors, the relevant field of view must be illuminated with a modulated or pulsed source. Direct ToF (DToF) systems, like most LIDARs, mechanically scan the environment with a pulsed light source, the reflection of which is observed with a pulse detector.
To identify a correlation between the transmitted RF modulated signal and the detected reflected signal, the transmitted signal must meet a number of requirements. In practice, it is precisely these requirements that make RF modulated detection systems highly impractical in terms of usability and applicability in vehicles: the maximum achievable detection range is very limited for signal intensities that must comply with the standard safety limits on permitted power for, for example, use in regular vehicles.
A DToF vision sensor, as used in most LIDAR systems, typically consists of a powerful pulsed laser (operating in a nanosecond pulse regime), a mechanical scanning system (for obtaining a 3D map from the 1D point measurements) and a pulse detector. Systems of this type are currently available from suppliers such as Velodyne Lidar of Morgan Hill, California. The Velodyne HDL-64E is an example of a state-of-the-art system that uses 64 high-power lasers and 64 detectors (avalanche diodes) in a mechanically rotating structure operating at 5 to 15 revolutions per second. The optical power that these DToF LIDAR systems require to achieve a measurement range acceptable for automotive applications is too high to be implemented in semiconductor technology, whose output power is typically 5 to 6 orders of magnitude lower. In addition, the use of mechanically rotating elements limits the possibilities for miniaturization, reliability and cost reduction of this type of system.
The American patent application with publication no. 2015/0063387 in the name of Trilumina describes a VCSEL that delivers a total power of 50 mW in a pulse with a pulse width of 20 ns. The commercially available Optek OPV310 VCSEL delivers a total power of 60 mW in a pulse with a duration of 10 ns; by extrapolation, its maximum optical power can be estimated at 100 mW. This value is only achieved under very strict conditions, i.e. an optimal duty cycle and a short pulse width, so as to avoid instability due to thermal problems. Both the Trilumina and the Optek examples illustrate that VCSELs are reaching their physical limits with regard to the achievable optical peak power in continuous operation, due to thermal constraints inherent in VCSEL technology.
At these orders of magnitude of available pulse energy for the ns pulses used in current DToF applications, the number of photons expected to be reflected by an object at a distance of 120 m is so low that the detection threshold of conventional semiconductor-based detectors such as CMOS or CCD sensors or SPAD arrays is not reached.
Since the thermal and physical limits of semiconductor lasers such as VCSELs have nearly been reached, increasing the VCSEL output by the 5 or 6 orders of magnitude that would be necessary to reach the range of conventional DToF systems is physically impossible. Even the use of avalanche diodes (AD or SPAD), which are in theory sensitive enough to detect single returning photons, does not allow the known DToF LIDAR architecture to be simply carried over to solve the range problem. A solid-state implementation of an array of SPADs must be read out serially, and a large number of SPADs is required to achieve the desired measurement accuracy. This serial readout, however, imposes a limitation on the bandwidth of the system, making it unsuitable for the desired accuracy in the application in question. For accuracies such as those of the Velodyne system (0.02 m to 0.04 m, independent of distance), the required readout speed exceeds the bandwidth that is practically achievable with current IC implementations. To operate with sufficient accuracy at 120 m, a SPAD array of 500x500 pixels would be required, which must be read out serially in an IC-based implementation. Moreover, to achieve the same precision as the aforementioned Velodyne system, 1000 pulses per millisecond would be required, corresponding to 1000 frames per millisecond. This translates into a readout rate of 250 gigapixels per second.
It is assumed that this is not technically feasible in the context of current SPAD IC technology.
The publication by Neil E. Newman et al., "High Peak Power VCSELs in Short Range LIDAR Applications", Journal of Undergraduate Research in Physics, 2013, http://www.jurp.org/2013/12017EXR.pdf, describes a VCSEL-based LIDAR system. The paper shows that the maximum output power of the described prototype was not sufficient for a wide-angle LIDAR with a range of more than 0.75 m. With a relatively focused beam (a spot size of 0.02 m at 1 m distance), the authors were able to detect an object at a maximum distance of 1 m.
The above examples clearly demonstrate that the optical power emitted by semiconductor lasers at the current state of the art does not meet the energy requirements of the known LIDAR systems, which would be needed for practical use in automotive applications (e.g. for ranges up to 120 m).
U.S. Patent No. 7,544,945, to Avago Technologies General IP (Singapore) Pte. Ltd., describes a vehicle-based LIDAR system and method using multiple lasers to obtain a more compact and more cost-effective LIDAR functionality. Each laser in an array of lasers is activated sequentially, and corresponding optical elements attached to the laser array project the respective emitted beams in substantially different directions. Light from these beams is reflected by objects in the environment of the vehicle and then detected, so as to provide information about those objects to the vehicle and/or its passengers. The patent thus describes a solid-state projector in which the individual lasers are successively activated and optically deflected, replacing the mechanical scanning of the existing DToF LIDAR systems.
A highly accurate medium-range environment measurement system for vehicles that does not use time-of-flight detection is known from international patent application WO 2015/004213 A1 in the name of the present applicant. In that publication, the localization of objects is based on the projection of pulsed light spots and the analysis of the displacement of the detected spots relative to predetermined reference positions; more specifically, the cited system uses triangulation. However, the achievable accuracy correlates with the triangulation base, which limits further miniaturization of the measurement system.
There is a continuing need for further miniaturization and/or longer measurement ranges for complex environment sensing applications, especially in the automotive field, such as ADAS (advanced driver assistance systems) and self-driving vehicles, and this at a reasonable price and in a compact, semiconductor-integrated form factor.
Summary of the invention
It is an object of embodiments of the present invention to realize further miniaturization and a larger measuring range in comparison with the existing displacement-based measuring systems for automotive applications. It is a further object of the intended embodiments to provide a fully solid-state alternative to the known LIDAR systems.
According to an aspect of the present invention, there is provided a system for determining the distance to an object, comprising: a semiconductor light source for projecting a pattern of laser spots onto the object in a series of pulses; a detector comprising a plurality of photosensitive pixels, the detector being configured to detect, in synchronism with the projected series of pulses, the light of the projected pattern of spots as reflected by the object; and processing means configured to calculate the distance to the object as a function of the readout value generated by said pixels in response to the detected light; the pixels being configured to generate said readout value by accumulating, for each pulse of said sequence, a first amount of electrical charge representative of a first amount of light reflected by the object during a first predetermined time window and a second amount of electrical charge representative of a second amount of light reflected by the object during a second predetermined time window, the second predetermined time window occurring after the first predetermined time window.
The present invention relies on the same physical principle as direct time-of-flight distance measurement systems, namely that light always needs a certain amount of time to travel a given distance. The present invention uses range gating to determine the distance traveled by a light pulse that has been transmitted and subsequently reflected by a target object. The present invention is based, inter alia, on the inventors' insight that by combining range gating, an at least partially simultaneous spot pattern projection (based on an innovative illumination scheme) and a low-power semiconductor light source, a substantially miniaturized, fully solid-state and energy-efficient long-range distance measuring method can be obtained. The term "pattern" as used herein refers to a spatial distribution of simultaneously projected spots. To determine the position of a detected spot reflection in three-dimensional space, the distance obtained via the ranging step must be combined with angle information to fix the remaining two spatial coordinates. A camera comprising a pixel array and suitable optics can provide this additional angle information, namely by identifying the pixel in which the reflection is detected.
The various embodiments of the invention are based on the inventors' further insight that, in order to be able to use spot patterns generated by semiconductor light sources in a LIDAR system at the desired ranges, a way must be found to bypass the limitations on optical intensity. The inventors have found that by extending the pulse duration and by integrating the reflected energy of multiple VCSEL-generated light pulses in at least two semiconductor storage reservoirs or at least two pixels, followed by a single readout of the integrated charge, a solid-state LIDAR system can be obtained with a considerably larger operating range than is currently possible with solid-state implementations. In the text that follows, the term "storage" will be used to designate a storage reservoir or pixel in which charge is accumulated in response to the detection of photons.
It is an advantage of the present invention that the solid-state light source and the solid-state sensor (e.g. a CMOS sensor, a CCD sensor, a SPAD array or the like) can be integrated on the same semiconductor substrate. The solid-state light source may comprise a VCSEL array or a laser with a grating adapted to produce the desired projection pattern.
In addition, by evaluating the reflected light energy detected in two consecutive time windows and normalizing it by the total charge accumulated in those two windows, the effect of the varying reflectivity of the object and the contribution of ambient light can be adequately accounted for in the distance calculation algorithm.
In the pixels, the incident light is accumulated either at the level of an individual charge reservoir or at the level of the pixel as a whole. An advantage of charge accumulation at charge reservoir level is that readout noise is minimized, leading to a better signal-to-noise ratio.
The transmission and detection of the series of pulses can be repeated periodically.
In an embodiment of the system according to the present invention, the first predetermined time window and the second predetermined time window are of substantially equal duration and occur back-to-back.
The advantage of this embodiment is that the contribution of ambient light to the distance calculation formula can easily be cancelled by subtracting the accumulated ambient light, averaged over the surrounding pixels.
In a specific embodiment, each of the plurality of pixels comprises at least two charge storage reservoirs, and the detection of the first amount of light and the detection of the second amount of light occur at a respective one of the at least two charge storage reservoirs.
The term "charge storage reservoir" refers to a storage space provided in the semiconductor substrate, e.g. a capacitor, that stores the electrical charges generated by the conversion of photons incident on the pixel. The purpose of this specific embodiment is to achieve a better signal-to-noise ratio, thereby improving the range of the sensor.
According to an aspect of the present invention, there is provided a vehicle on which is mounted a system as described above, arranged to measure at least a portion of the area around the vehicle.
The system according to the present invention is particularly relevant in a vehicle with an ADAS or an autonomous control unit such as, but not limited to, an ECU (electronic control unit). The vehicle may include a vehicle control unit adapted to receive the measurement information from the system and to use it for ADAS applications or autonomous driving. The portion of the space around the vehicle may comprise the road surface in front of, next to or behind the vehicle, so that the system can supply profile data of the road ahead of the car for use in active or semi-active suspension.
According to an aspect of the present invention, there is provided a camera comprising a system as described above, wherein the system is configured to add 3D data to the camera image based on the measurement data obtained by the system, thereby making it possible to create a 3D image.

According to an aspect of the present invention, there is provided a method for determining a distance to an object, the method comprising: using a semiconductor light source to project a pattern of spots of laser light onto the object in a series of pulses; using a detector comprising a plurality of pixels to detect, in synchronism with said series of pulses, the light of the pattern of projected spots as reflected by the object; and calculating the distance to the object as a function of the readout value generated by said pixels in response to the detected light; the pixels being configured to generate said readout value by accumulating, for each pulse of said sequence, a first amount of electrical charge representative of a first amount of light reflected by the object during a first predetermined time window and a second amount of electrical charge representative of a second amount of light reflected by the object during a second predetermined time window, the second predetermined time window occurring after the first predetermined time window.
In an embodiment of the method according to the present invention, the first predetermined time window and the second predetermined time window are of substantially equal duration and occur back-to-back.
In an embodiment of the method according to the present invention, each of the plurality of pixels comprises at least two charge storage reservoirs, and the detection of the first amount of light and the detection of the second amount of light occur at a respective one of the at least two charge storage reservoirs.
In an embodiment of the method according to the present invention, the projection, detection and calculation are repeated periodically.
According to an aspect of the present invention, there is provided a computer program product comprising code configured to cause a processor to carry out the method described above.
The technical effects and advantages of embodiments of the camera, the vehicle, the method and the computer program product according to the present invention correspond, mutatis mutandis, to those of the corresponding embodiments of the system according to the present invention.
Brief description of the figures
These and other aspects and advantages of the present invention will now be described with reference to the accompanying drawings, in which:
Figure 1 represents a diagram of an embodiment of the method according to the present invention;
Figure 2 schematically represents an embodiment of the system according to the present invention;
Figure 3 represents a time diagram for light projection and detection in the embodiments of the present invention;
Figure 4 shows diagrams of an example pixel output as a function of the incident light intensity as obtained with logarithmic tone mapping (above) and multi-linear tone mapping (below);
Figure 5 represents a diagram of exemplary pixel outputs as a function of the incident light intensity, as obtained with a high dynamic range multiple-output pixel;
Figure 6 schematically shows the structure of a high dynamic range pixel for use in an embodiment of the present invention;
Figure 7 illustrates schematically an embodiment of a pixel architecture with two charge reservoirs (storage) each with a separate transfer gate for use in the various embodiments of the present invention;
Figure 8 schematically represents a first example of an optical arrangement for use in embodiments of the present invention;
Figure 9 schematically illustrates a second example of an optical arrangement for use in embodiments of the present invention;
Figure 10 schematically illustrates a third example of an optical arrangement for use in embodiments of the present invention; and
Figure 11 schematically illustrates a fourth example of an optical arrangement for use in embodiments of the present invention.
Detailed description of embodiments
Environment measuring systems of the type described in international patent application WO 2015/004213 A1, in the name of the present applicant, have the advantage that an extensive scene can be measured while it is illuminated simultaneously, or partially simultaneously, in a number of discrete and well-defined spots, in particular in a predetermined spot pattern. By using VCSEL lasers with an excellent beam quality and a very narrow spectrum, a scene can be measured even with limited power, and even in the presence of daylight. The actual distance measurement performed in the system of WO 2015/004213 A1 relies on displacement detection, in particular triangulation. This was seen as the only method that could practically be realized in the context of the long (quasi-stationary) pulse durations that are necessary because of the 'low' power budget. Until now it was not possible to achieve the same power/performance characteristics with a compact, semiconductor-based time-of-flight system.
The present invention overcomes this limitation by fundamentally changing the way the time-of-flight system operates. The invention increases the total amount of light energy emitted per time-of-flight measurement (and thus the number of photons available for detection at the detector per measurement) by increasing the duration of the individual pulses and by producing a virtual "composite pulse" consisting of a sequence of a large number of individual pulses. This bundling of longer pulses gives the inventors the required amount of light energy (photons) to realize the desired measuring range with a low VCSEL power.
While an individual pulse of the known LIDAR systems has a duration of 1 ns, the systems of the present invention benefit from a considerably longer pulse duration to partially compensate for the relatively low power of semiconductor lasers such as VCSELs; in embodiments of the present invention, individual pulses within a sequence may have an exemplary pulse duration of 1 µs (this is one possible value, chosen here to keep the description clear and simple; in general, the various embodiments may use a pulse duration of, for example, 500 ns or more, preferably 750 ns or more, most preferably 900 ns or more). In an example according to the present invention, a sequence may have 1000 pulses, which together span a duration of 1 ms. Since light needs approximately 0.66 µs to travel to a target at a distance of 100 m and back to the detector, composite pulses of this total duration can be used for distance measurements of this order of magnitude; those skilled in the art will be able to adjust the required number of pulses as a function of the chosen pulse width and the desired range.
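As a sanity check on these example numbers, the following sketch (added here for illustration, not part of the original specification) reproduces the round-trip arithmetic; the pulse width and sequence length are the example values from the paragraph above.

    # Back-of-envelope timing check; assumed example values: 1 us pulses,
    # 1000 pulses per sequence, 100 m target distance.
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def round_trip_time(distance_m: float) -> float:
        # Time for light to reach a target and return, in seconds.
        return 2.0 * distance_m / C

    pulse_width_s = 1e-6
    pulses_per_sequence = 1000

    print(round_trip_time(100.0))                # ~6.67e-07 s, i.e. ~0.66 us
    print(pulse_width_s * pulses_per_sequence)   # 1e-03 s: a 1 ms composite pulse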
Detection of the sequence preferably occurs in synchronization with the VCSEL-based light source: the charges generated in the at least one pixel storage reservoir by the incident photons are accumulated over the entire sequence prior to readout. The term "readout value" is used hereinafter to indicate the value representative of the charge, and thus of the amount of light received by the pixel, integrated over the sequence. The transmission and detection of the sequence may be repeated periodically.
The present invention operates on the basis of range gating. Range-gated imagers integrate the detected power of the reflection of the transmitted pulse over the duration of the integration window. The time lapse between the window of the pulse projection and the arrival of the reflected pulse depends on the time the light pulse needs to return, and thus on the distance traveled by that pulse; in other words, the integrated power is correlated with the distance traveled by the pulse. The present invention applies this range gating principle to the pulse sequences described above. In the following description, the integration of all individual pulses of a sequence in the storage reservoir of a pixel, so as to obtain a single measurement for the entire sequence, is implicitly understood.
Figure 1 gives an overview of an embodiment of the method according to the present invention. Without loss of generality, the ranging method is described with reference to a range gating algorithm. In a first time window 10, the method comprises projecting (110) a pattern of spots of laser light (e.g. a regular or irregular spatial pattern of spots) from a light source comprising a solid-state light source 210 onto any objects in the targeted part of the environment. The spatial pattern is transmitted repeatedly in a series of pulses.
As indicated above, the semiconductor light source may comprise a VCSEL array or a laser with a grating adapted to project the desired pattern. In order for the system to function optimally even at long ranges and with high levels of ambient light (e.g. in daylight), a VCSEL for use in embodiments of the present invention is preferably arranged to emit a maximum optical power per spot per unit of surface area. Lasers with a good beam quality (low M2 factor) are therefore preferred. More preferably, the lasers should have a minimal wavelength spread; such a small wavelength spread can be achieved with monomode lasers. In other words, the wavelength is substantially constant and can be reproduced with the necessary spatial and temporal accuracy.
During the same time window in which a pulse is projected, or in a largely overlapping time window, a first amount of light reflected according to the pattern of spots projected onto the object in question is detected (120) at a detector, which is preferably mounted as close as possible to the light source. The synchronicity or near-synchronicity between the projection (110) of the spot pattern and the first detection (120) of its reflection is illustrated in the overview diagram by the side-by-side representation of these steps. In a subsequent second predetermined time window 20, a second amount of the reflected light is detected (130) at the detector; during this second window 20, the solid-state light source is switched off. The distance to the object can then be calculated (140) as a function of the first amount of reflected light and the second amount of reflected light.
The first predetermined time window 10 and the second predetermined time window 20 are preferably back-to-back windows of substantially equal duration, so as to facilitate the cancellation of noise and ambient light by subtracting one of the detected light quantities from the other. An exemplary timing scheme is described in more detail below in conjunction with Figure 3.
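A minimal sketch of the principle just described, assuming ideal rectangular pulses and ignoring noise, ambient light and inverse-square attenuation: it computes which fraction of a reflected pulse is accumulated in each of the two back-to-back windows for a given target distance.

    # Fractions of a reflected rectangular pulse captured by two back-to-back
    # windows of width T (window 1 spans [0, T), window 2 spans [T, 2T)).
    def charge_split(distance_m: float, window_s: float, c: float = 3e8):
        delay = 2.0 * distance_m / c                          # round-trip delay
        t = window_s
        # Overlap of the reflected pulse [delay, delay + T) with each window.
        a = max(0.0, min(t, delay + t) - max(0.0, delay))     # window 1
        b = max(0.0, min(2 * t, delay + t) - max(t, delay))   # window 2
        return a / t, b / t

    A, B = charge_split(distance_m=50.0, window_s=1e-6)
    print(A, B)   # ~0.67 and ~0.33: at 50 m, a third of the pulse spills
                  # into the second window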
The detector comprises a plurality of pixels, i.e. it consists of a pixel array combined with suitable optics that project an image of the scene (including the spots) onto the pixels. The term "pixel" as used herein may refer to an individual photosensitive area or storage reservoir of a pixel, or to an entire pixel (which may comprise multiple storage reservoirs, see below). For each projected spot, the detection (120) of the first amount of light and the detection (130) of the second amount of light are performed by the same pixel or the same group of pixels.
Without loss of generality, each of the pixels may be a pixel containing at least two charge storage reservoirs 221, 222, such that the detection (120) of the first amount of light and the detection (130) of the second amount of light occur at the respective charge storage reservoirs 221, 222 of the same pixel or pixel group.
Figure 2 schematically represents an embodiment of the system according to the present invention in relation to an object 99 in the scene. The system 200 comprises a solid-state light source 210 for projecting a pattern of spots, which may be repeated periodically, onto the object 99. A detector 220 is arranged near the light source and configured to detect light reflected by the object.
The light beam reflected by object 99 is shown as a dotted arrow traveling from the light source 210 to the object 99 and back to the detector 220. It should be noted that this representation is strictly schematic and not indicative of real relative distances or angles.
A synchronization means 230, which may include a conventional clock circuit or oscillator, is configured to operate the solid-state light source 210 so as to project the pattern of spots onto the object during the first predetermined time windows 10, and to operate the detector 220 so as to detect a first amount of light from the spots reflected by the object 99 during substantially the same time windows. It further operates the detector 220 to detect a second amount of light reflected from the spots on the object 99 during the respective subsequent second predetermined time windows 20. Suitable processing means 240 are configured to calculate the distance to the object as a function of the first amount of reflected light and the second amount of reflected light.
Figure 3 shows a timing diagram for light projection and detection in an embodiment of the present invention. For clarity, only a single pulse of the periodically repeated pulse sequence of Figure 1 is shown; it consists of a first time window 10 followed by a second time window 20.
As can be seen in Figure 3a, the solid-state light source 210 is in its "ON" state during the first time window 10, projecting the pattern of spots onto the scene. During the second time window 20, the solid-state light source 210 is in its "OFF" state.
The light incident on the detector 220 is delayed relative to the start of the projection by an amount of time proportional to the distance traveled (approximately 3.3 ns/m in free space). Due to this delay, only part of the reflected light is detected in the first storage reservoir 221 of the detector 220, which is activated only during the first time window 10. The charge accumulated in this first storage reservoir during its activation period (the first time window 10) consists partly of noise and ambient light incident on the pixel before the arrival of the reflected pulse, and partly of noise, ambient light and the leading part of the reflected pulse.
The trailing part of the reflected pulse is detected in the second charge reservoir 222 of the detector 220, which is activated only during the second time window 20, which preferably immediately follows the first time window 10. The charge accumulated in this second charge reservoir during its activation period (the second time window 20) consists of a portion representing the noise, the ambient light and the trailing edge of the reflected pulse, and a portion representing only the noise and the ambient light incident on the pixel after the arrival of the reflected pulse.
The greater the distance between the reflecting object 99 and the system 200, the smaller the proportion of the pulse detected in the first charge reservoir 221 and the greater the proportion detected in the second charge reservoir 222.
If the leading edge of the reflected pulse arrives after the closing of the first well 221 (i.e. after the end of the first time window 10), then the proportion of the reflected pulse detected in the second charge reservoir 222 decreases again with increasing time-of-flight delay.
The amounts of charge A, B accumulated in the respective wells 221, 222 for different distances of the object 99 are shown in Figure 3b. To simplify the representation, the attenuation of the light with distance, following the inverse square law, has not been taken into account in the diagram. It is clear that, for time-of-flight delays up to the combined duration of the first time window 10 and the second time window 20, the time-of-flight delay can in principle be unambiguously deduced from the values of A and B:
For time-of-flight delays up to the duration of the first time window 10, B is proportional to the distance to the object 99. To easily arrive at an absolute distance determination, the normalized value B/(B+A) can be used; this removes the impact of any non-perfect reflectivity of the detected object, as well as of the inverse-square attenuation of the light with distance.
For time-of-flight delays exceeding the duration of the first time window 10, A consists of daylight and noise contributions only (not shown), and C−B is practically proportional to the distance of the object 99 (after correction for the inverse-square dependence on distance), where C is an offset value.
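The first of these two regimes can be summarized in a short sketch (an illustration added here, not part of the original text), assuming the ambient-light and noise contributions have already been subtracted from A and B:

    # Distance from the normalized charge ratio, valid while the round-trip
    # delay does not exceed the first window (B then grows linearly with it).
    def distance_from_charges(a: float, b: float, window_s: float,
                              c: float = 3e8) -> float:
        ratio = b / (a + b)        # reflectivity and 1/r^2 cancel in the ratio
        delay = ratio * window_s   # time-of-flight delay
        return c * delay / 2.0     # halve for the round trip

    print(distance_from_charges(a=2.0, b=1.0, window_s=1e-6))  # ~50 m

The normalization works because reflectivity and inverse-square attenuation scale A and B by the same factor, which cancels in B/(B+A).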
While Figures 3a and 3b illustrate the principle of the invention with respect to a single pulse emitted in the time window 10, it should be understood that the pulse shown is part of a sequence of pulses as defined above. Figure 3c schematically illustrates exemplary timing characteristics of such a sequence. As shown, the projection scheme 40 consists of the repeated emission of a sequence 30 of individual pulses 10. The width of the individual pulses 10 is determined by the maximum measuring range. The complete sequence may be repeated at a frequency of, for example, 60 Hz.
The ranging system according to the present invention may be integrated with a triangulation system in accordance with WO 2015/004213 A1. If miniaturization is pursued, the triangulation-based system will have a relatively small distance between its projector and its detector, and thus a limited operating range. It is precisely at such short distances, however, that the combination offers an advantage, because the triangulation system can cover the distances that the time-of-flight based system cannot measure with sufficient accuracy.
The entire ranging process may be repeated iteratively so as to monitor the distance to the object over time. The result of this method can therefore be used in processes that require distance information for the detected objects on a continuous basis, such as advanced ADAS, vehicles with active suspension, or autonomous vehicles.
For all elements of the system to function optimally as described, the system must be thermally stable. Thermal stability avoids, among other things, unwanted wavelength shifts of the optical elements (thermal drift), which would otherwise impair the proper functioning of the optical filters and other elements of the optical chain. Embodiments of the system according to the present invention achieve thermal stability either by design, or by active cooling and heating based on a temperature control loop with a PID-type controller.

WO 2015/004213 A1 describes various techniques to minimize the amount of ambient light that reaches the pixels during the detection intervals, thereby improving the accuracy of the detection of the laser spot pattern. Although these techniques were not described in the context of a LIDAR system, the inventors of the present invention have found that several of them give excellent results in combination with embodiments of the present invention. This applies in particular to the use of narrow-band filters at the detector and the use of suitable optical arrangements to ensure a nearly perpendicular incidence of the reflected light on those filters. The details of these improvements as described in WO 2015/004213 A1 are hereby incorporated by reference. Further features and details are provided below.
Although the various techniques known from WO 2015/004213 A1 can be applied to embodiments of the present invention to minimize the amount of ambient light reaching the pixels during the detection intervals, a certain amount of ambient light cannot be avoided. In a multi-pixel system, only some pixels are illuminated by reflected spots, while the others are illuminated only by residual ambient light. The signal levels of the latter group of pixels can be used to estimate the contribution of the ambient light to the signal of interest, and to subtract that contribution from it. Additionally or alternatively, background or ambient light can be subtracted from the detected signal at pixel level.
This requires two exposures: one during the arrival of the laser pulse and one in the absence of the pulse.
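A minimal sketch of this pixel-level correction, assuming equal exposure durations for the two acquisitions (the names and the example values are illustrative only):

    # Pixel-level ambient subtraction with two equally long exposures:
    # one with the laser firing and one with the laser off.
    def subtract_ambient(exposed: float, ambient_only: float) -> float:
        # Clamp at zero: shot noise may make the difference slightly negative.
        return max(0.0, exposed - ambient_only)

    # Applied to both integration windows before the distance calculation:
    a = subtract_ambient(exposed=520.0, ambient_only=180.0)   # window 1
    b = subtract_ambient(exposed=350.0, ambient_only=180.0)   # window 2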
In some embodiments, the detector may be a high dynamic range detector, i.e. a detector having a dynamic range of at least 90 dB, preferably at least 120 dB. A high dynamic range sensor, i.e. a sensor that can handle a large number of photons without saturating while retaining sufficient discrimination to detect intensity levels in the darkest parts of the scene, is advantageous here: it allows a sensor with a long distance range that is still able to detect objects at short distance (where the reflected light is relatively strong) without saturating. The inventors have found that the use of a true high dynamic range sensor is cheaper than the use of a sensor that applies tone mapping. In tone mapping, the linear range of the sensor is compressed into a smaller output range; documented compression methods include logarithmic compression and multilinear compression (see Figure 4). Such non-linear compression, however, requires re-linearization of the signals before logical or arithmetic operations can be performed on the recorded scene to extract the relevant information. The solution according to the present invention therefore increases detection accuracy without increasing the computational requirements. It is an advantage of some embodiments to use a fully linear high dynamic range sensor as shown in Figure 5. A pixel architecture and an optical detector capable of providing the desired dynamic range characteristics are described in US Patent Application Publication No. US 2014/353472 A1, in particular paragraphs 65-73 and 88, the contents of which are included by reference for the purpose of introducing this aspect of the present invention to those skilled in the art.
Embodiments of the present invention use a high dynamic range pixel. This can be obtained through a large full-well capacity of the charge reservoir, through designs that limit the electronic noise per pixel, through the use of CCD gates that add no noise to the charge transfer, through a design with a large detective quantum efficiency (DQE) (for example, of the order of 50% for front-side illumination or 90% for back-side illumination, also known as back thinning), through a special design such as that shown in Figure 6 (see below), or through any combination of these improvements. Moreover, the dynamic range can be further increased by adding an overflow capacity to the pixel in an overlying structure at the front side (this embodiment again requires back thinning). Preferably, the pixel design implements an anti-blooming mechanism.
Figure 6 gives a schematic illustration of an advantageous embodiment of a pixel with a high dynamic range. The example in this figure uses two storage gates 7, 8, both connected to the floating diffusion. After exposure, the electrons generated by the scene and the laser pulse are transferred to the floating diffusion via the transfer gate 11, with both gate voltages Vgate1 and Vgate2 set high. The charges are then distributed over both capacitors, realizing a large full well. Once this high-full-well data has been read out via the amplifier, the voltage Vgate2 is set low: the electrons flow back towards capacitor 7, which increases the total pixel gain, and the data can again be read out via the amplifier. An even higher gain can subsequently be obtained by also applying a low voltage to Vgate1, whereupon the electrons flow back to the floating diffusion 2.
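A hedged numerical sketch of this double readout (the capacitance values are illustrative assumptions, not taken from the text): the same accumulated charge is read once over both capacitors (large full well, low conversion gain) and once over a single capacitor (higher conversion gain).

    # Dual-gain readout of one accumulated charge packet, as in Figure 6.
    Q_E = 1.602e-19  # elementary charge, coulombs

    def dual_gain_readout(n_electrons: int, c1: float = 10e-15,
                          c2: float = 30e-15):
        q = n_electrons * Q_E
        v_low_gain = q / (c1 + c2)  # both gates high: charge shared over C1+C2
        v_high_gain = q / c1        # Vgate2 low: charge returned to C1 only
        return v_low_gain, v_high_gain

    print(dual_gain_readout(10_000))  # the second sample is 4x more sensitive
                                      # with these assumed capacitances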
Figure 7 shows a possible dual-well (dual-bin, or double charge reservoir) embodiment of a pixel intended for use in CMOS technology. The incident signal is distributed over two charge reservoirs, each of which has a separate transfer gate controlled by an external pulse synchronized with the pulses of the laser source.
Figures 8-10 illustrate cameras that can be used in embodiments of the invention, in which the light source emits monochromatic light and at least one detector is equipped with a corresponding narrow bandpass filter and with optics arranged to modify the angle of incidence on that filter, so as to confine the angle of incidence to a predetermined range around the normal of a major surface of the filter; these optics comprise an image-side telecentric lens. The term "camera" is used here for the combination of a sensor and the associated optics (lenses, lens arrays, filter). In Figure 9 in particular, the optics further comprise a minilens array disposed between the image-side telecentric lens and the at least one detector, such that the individual minilenses of the array focus the incident light onto the respective photosensitive areas of the individual pixels of the at least one detector. It is an advantage of this minilens-per-pixel arrangement that the loss due to the fill factor of the underlying sensor can be reduced, because the minilenses optically guide all incident light onto the photosensitive portion of the pixels.
All these examples result in incident light traveling substantially the same distance through the filter medium; in other words, the incident light strikes the filter surface substantially perpendicularly, its angle of incidence being confined to a predetermined range around the normal of the filter surface. This enables accurate filtering within a narrow bandwidth, for example to filter out daylight and sunlight, so that the reflected spots stand out above the daylight level.
The correction of the angle of incidence is of particular importance in embodiments of the present invention in which the entire space around a vehicle is to be monitored with a limited number of sensors, for example 8 sensors, such that the incident rays span a solid angle of, for example, 1 rad x 1 rad. Figure 8 illustrates a first schematic optical arrangement of this type. It comprises a first lens 1030 and a second lens 1040, with approximately the same focal length f, in an image-space telecentric configuration; this means that all chief rays (rays passing through the center of the aperture stop) are perpendicular to the image plane. A numerical example: an aperture of 0.16 corresponds to a cone angle of 9.3° (half cone angle). The maximum angle of incidence on the narrow bandpass filter 1060, disposed between the lens system 1030-1040 and the sensor 102, would thus be 9.3°.
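A short check of this numerical example, under the assumption (not stated in the text) that the quoted aperture value of 0.16 is the image-side numerical aperture:

    \theta_{1/2} = \arcsin(\mathrm{NA}) = \arcsin(0.16) \approx 9.2^\circ

which is consistent with the 9.3° quoted above.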
As shown in Figure 9, a preferred design consists of a tandem of two lenses 1130, 1140 with approximately the same focal length f in an image-space telecentric configuration (the configuration may also be object-space telecentric), a planar stack consisting of a minilens array 1150, a spectral filter 1160 and a CMOS detector 102. Because the center O of the first lens 1130 lies in the focal point of the second lens 1140, every ray that crosses O will be refracted by the second lens 1140 into a direction parallel to the optical axis. Consider now a particular laser spot S 1110 located at a very large distance compared to the focal length of the first lens 1130. The image of this spot 1110 formed by the first lens 1130 is then a point P located close to the focal plane of that lens, and thus practically in the center plane of the second lens 1140. The light rays emitted by the spot S 1110 and captured by the first lens 1130 form a light cone converging towards the point P in the second lens 1140. The central axis of this cone passes through the point O and is refracted parallel to the optical axis, and therefore strikes the spectral filter 1160 perpendicularly, so that the optimum spectral sensitivity is achieved. Hence, the second lens 1140 acts as a correction lens for the angle of the incident light cone. The remaining rays of the cone can also be bent into a bundle parallel to the optical axis by a small convex minilens 1150 placed behind the second lens 1140 such that the point P lies in the focal point of the minilens 1150. In this way, all imaging rays of the spot S 1110 are bent into a direction substantially perpendicular to the spectral filter. This can be done separately for every pixel of the CMOS detector by placing an array of minilenses in front of the pixels; in this configuration the minilenses have an image-telecentric function. The main advantage is that the pupil of the first lens 1130 can be enlarged, or the aperture stop can be removed, while the resulting increase in spherical aberration is compensated by a local correction in the optics of the minilens 1150; this improves the sensitivity of the sensor assembly. A second minilens array (not shown in Figure 9) may be added between the spectral filter 1160 and the CMOS pixels 102 to focus the parallel rays onto the photodiodes of the pixels so as to maximize the fill factor.
Commercial lenses can be used for the first and second lenses 1130, 1140. Those skilled in the art will appreciate that lenses commonly used in smartphone cameras or webcams of comparable quality can also be used. As an example, the iSight camera used in certain smartphones has a 6x3 mm CMOS sensor with 8 megapixels, a 1.5 µm pixel size, a large aperture of f/2.2, an objective focal length of approximately f = 7 mm, and a pupil diameter of approximately 3.2 mm; its viewing angle is of the order of 1 rad x 1 rad. Assuming that the resolution of the camera is roughly the size of a pixel (1.5 µm), it can be concluded (following Abbe's law) that the aberrations of the lens are corrected for all rays within the viewing angle selected by the aperture.
Figure 10 shows a variant of the configuration of Figure 9, optimized for production in a single lithographic process. The first lens 1230 is similar to the first lens 1130 of the previous embodiment, but the angle-correcting second lens 1140 is replaced by a Fresnel lens 1240 with the same focal length. The advantage of a Fresnel lens is that it is completely flat and can be produced with nanoelectronic fabrication techniques (with discrete phase zones). A second minilens array 1270 may be added between the spectral filter 1260 and the CMOS pixels 102 to redirect the parallel rays onto the photodiodes of the pixels so as to maximize the fill factor. The camera is thus essentially a standard camera such as the iSight, in which the CMOS sensor is replaced by a specially designed multi-layer sensor in which all components are integrated into one block within the same lithographic process. This multi-layer sensor is cheap in mass production, compact and robust, and requires no alignment. Each of the five layers 1240, 1250, 1260, 1270, 102 has its own function in meeting the requirements of the present invention.
Since the minimum cone angle produced by a lens with diameter d is of the order of λ/d, where λ is the wavelength of the light, the minimum cone angle is about 0.1 rad for a minilens with diameter d = 8.5 µm and λ = 850 nm. With a good quality spectral interference filter, this corresponds to a spectral window of approximately 3 nm.
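A hedged worked version of this estimate, using the standard angle-shift approximation for interference filters with an assumed effective index n_eff ≈ 1.2 (the text does not state this value):

    \theta \approx \frac{\lambda}{d} = \frac{850\ \mathrm{nm}}{8.5\ \mu\mathrm{m}} = 0.1\ \mathrm{rad}, \qquad
    \Delta\lambda \approx \lambda_0 \, \frac{\theta^2}{2\, n_{\mathrm{eff}}^2} \approx 850\ \mathrm{nm} \times \frac{0.01}{2 \times 1.2^2} \approx 3\ \mathrm{nm}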
Figure 11 illustrates an alternative optical arrangement, comprising a dome 1310 (e.g. a curved glass plate) with a narrow bandpass filter 1320 disposed on its inside (as shown) or its outside (not shown). The advantage of placing the filter 1320 on the inside of the dome 1310 is that the dome then protects the filter against external influences. The dome 1310 and the filter 1320 cooperate optically such that incident light passes through the filter 1320 along a direction substantially perpendicular to the dome surface. A fish-eye lens 1330 is placed between the dome-filter arrangement and the sensor 102, which may be a CMOS or CCD sensor or a SPAD array; the fish-eye optics 1330 direct the light passing through the dome-filter arrangement onto the sensitive surface of the sensor.
Fish-eye optics may also be provided on the projector side. In a specific embodiment, a plurality of VCSELs is mounted in a 1 x n or m x n configuration, whereby the output angles of the laser beams can span a solid angle of m x 1 rad in height and n x 1 rad in width.
In some embodiments of the present invention, the intensity of the spots can be kept substantially constant over the full depth range by means of a stepped or variable attenuation filter at the detector side. Alternatively or additionally, a non-symmetrical lens pupil can be provided that attenuates the intensity of the spots closer to the detector, while spots farther from the detector are received at full intensity. In this way clipping of the detector is avoided, and the average intensity becomes substantially the same for all spots.
In some embodiments, the radiation source may be a VCSEL that is divided into different zones, the laser ON time being controlled separately for each zone. The images of the spots can thus be kept at a constant intensity, e.g. 2/3 of the A/D range. Alternatively, the drive voltage can be varied per row of spots as a function of the height in the image, again so as to obtain a constant intensity. Such a control scheme can be regarded as a feedback loop for avoiding saturation. The individual VCSELs in the array can likewise be controlled individually for intensity, so as to vary the intensity of the individual spots within the pattern while the spots are projected simultaneously.
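A minimal sketch of such a saturation-avoidance loop (the parameter names, the proportional-control form and the 12-bit A/D range are illustrative assumptions; only the 2/3 set point comes from the text above):

    # Proportional control of the laser ON time for one VCSEL zone, aiming to
    # keep the brightest spot pixel of that zone near 2/3 of the A/D range.
    def adjust_on_time(on_time_s: float, peak_adc: int, adc_max: int = 4095,
                       set_point: float = 2.0 / 3.0, gain: float = 0.5) -> float:
        error = set_point - peak_adc / adc_max   # positive if the spot is too dim
        return max(0.0, on_time_s * (1.0 + gain * error))

    # Example: a nearly saturated zone gets its ON time reduced.
    print(adjust_on_time(on_time_s=1e-6, peak_adc=3900))   # < 1e-6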
In some other embodiments of the present invention, a micro-prism matrix can be placed in front of the narrowband filter such that the radiation is incident on the filter at an angle between +9° and −9°. This permits narrow-bandwidth filtering. The prism matrix can be manufactured, for example, by plastic injection molding.
In embodiments of the present invention, e.g. where active suspension applications are considered, the projection of the spot pattern is preferably directed downwards, i.e. towards the road.
A system according to the invention may include an implementation of the steps of the methods described above in dedicated hardware (e.g. an ASIC), configurable hardware (e.g. an FPGA), programmable components (e.g. a DSP or a general-purpose processor with suitable software), or a combination thereof. The same component(s) may also include other functions. The present invention also relates to a computer program product comprising code means for carrying out the steps described above, which product may be provided on a computer-readable medium such as an optical, magnetic or solid-state carrier.
The present invention also relates to a vehicle comprising the system described above.
Embodiments of the present invention can be used advantageously in a wide variety of applications, including without limitation automotive applications, industrial applications, gaming applications and the like, both indoors and outdoors, at short or long range. In some applications, different sensors according to embodiments of the present invention can be combined (e.g. interconnected) to produce panoramic coverage, preferably over a full circle (360° field of view).
Although the invention has been described above with reference to separate system and method embodiments, this has been done for clarity only. Those skilled in the art will appreciate that elements described in connection with the system or the method alone can also be applied to the method or the system, respectively, with the same technical effects and advantages. Furthermore, the scope of the invention is not limited to these embodiments, but is defined by the appended claims.
Claims:
Claims (10)
[1]
A system (200) for determining a distance to an object, comprising: a semiconductor light source (210) adapted to project a pattern of spots of laser light onto said object in a series of pulses; a detector (220) comprising a plurality of pixels, the detector (220) being configured to detect the light of the pattern of spots as reflected by the object in synchronism with said series of pulses; and processing means (240) configured to calculate said distance to the object as a function of the readout value generated by the pixels in response to the detected light; wherein the pixels are configured to generate said readout value by accumulating, for each pulse of the sequence, a first amount of electrical charge representative of a first amount of light reflected by the object during a first predetermined time window (10) and a second amount of electrical charge representative of a second amount of light reflected by the object during a second predetermined time window (20), said second predetermined time window (20) occurring after the first predetermined time window (10).
[2]
The system according to claim 1, wherein the first predetermined time window and said second predetermined time window are of substantially equal duration and occur back-to-back.
[3]
The system according to claim 1 or claim 2, wherein each of said plurality of pixels comprises at least two charge storage reservoirs, and wherein the detection of the first amount of light and the detection of the second amount of light occur at a respective one of said at least two charge storage reservoirs.
[4]
A vehicle comprising a system (100) according to any of the preceding claims, arranged to measure at least a portion of the area around the vehicle.
[5]
A camera comprising a system (100) according to any of claims 1 to 3, wherein the system (100) is adapted to add 3D information to the camera image based on data obtained from the system, thereby making it possible to create a 3D image.
[6]
A method of determining a distance to an object, the method comprising: using a semiconductor light source (210) to project (110) a pattern of spots of laser light onto said object in a series of pulses; using a detector (220) comprising a plurality of pixels to detect (120; 130) the light of the pattern of spots as reflected by the object in synchronism with said series of pulses; and calculating (140) the distance to the object as a function of the readout value generated by the pixels in response to the detected light; wherein said pixels generate said readout value by accumulating, for each pulse of the sequence, a first amount of electrical charge representative of a first amount of light reflected by the object during a first predetermined time window (10) and a second amount of electrical charge representative of a second amount of light reflected by the object during a second predetermined time window (20), said second predetermined time window (20) occurring after said first predetermined time window (10).
[7]
The method of claim 6, wherein the first predetermined time window and said second predetermined time window are of substantially equal duration and occur back-to-back.
[8]
The method according to claim 6 or claim 7, wherein each of said plurality of pixels comprises at least two charge storage reservoirs, and wherein the detection of the first amount of light and the detection of the second amount of light occur at a respective one of said charge storage reservoirs.
[9]
The method of any one of claims 6-8, wherein said projecting (110), said detecting (120, 130) and said calculating (140) are repeated periodically.
[10]
A computer program product comprising code adapted to cause a processor to carry out the method according to any of claims 6-9.
Similar technologies:
Publication number | Publication date | Patent title
BE1023788B1|2017-07-26|System and method for determining the distance to an object
JP2018531374A6|2018-12-13|System and method for measuring distance to an object
EP3519860B1|2020-09-09|System and method for determining a distance to an object
US10852400B2|2020-12-01|System for determining a distance to an object
EP3519854B1|2020-09-09|Method for subtracting background light from an exposure value of a pixel in an imaging array, and pixel for use in same
US11029391B2|2021-06-08|System for determining a distance to an object
KR20200096828A|2020-08-13|Systems and methods for determining distance to objects
EP3550329A1|2019-10-09|System and method for determining a distance to an object
JP7028878B2|2022-03-02|A system for measuring the distance to an object
Family patents:
Publication number | Publication date
CN108139483A|2018-06-08|
JP6938472B2|2021-09-22|
JP2018531374A|2018-10-25|
US20180299554A1|2018-10-18|
BE1023788A1|2017-07-26|
WO2017068199A1|2017-04-27|
EP3159711A1|2017-04-26|
US10921454B2|2021-02-16|
EP3365700A1|2018-08-29|
KR20180073571A|2018-07-02|
EP3365700B1|2020-09-02|
CN108139483B|2022-03-01|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

DE69635858T2|1995-06-22|2006-11-30|3Dv Systems Ltd.|TELECENTRIC 3D CAMERA AND RELATED METHOD|
EP1152261A1|2000-04-28|2001-11-07|CSEM Centre Suisse d'Electronique et de Microtechnique SA|Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves|
JP3832441B2|2002-04-08|2006-10-11|松下電工株式会社|Spatial information detection device using intensity-modulated light|
US6906302B2|2002-07-30|2005-06-14|Freescale Semiconductor, Inc.|Photodetector circuit device and method thereof|
US6888122B2|2002-08-29|2005-05-03|Micron Technology, Inc.|High dynamic range cascaded integration pixel cell and method of operation|
US6814171B2|2002-08-30|2004-11-09|Motorola, Inc.|Automotive drive assistance system and method|
DE10305010B4|2003-02-07|2012-06-28|Robert Bosch Gmbh|Apparatus and method for image formation|
JP4280822B2|2004-02-18|2009-06-17|国立大学法人静岡大学|Optical time-of-flight distance sensor|
GB0405014D0|2004-03-05|2004-04-07|Qinetiq Ltd|Movement control system|
IL181030A|2006-01-29|2012-04-30|Rafael Advanced Defense Sys|Time-space multiplexed ladar|
US7544945B2|2006-02-06|2009-06-09|Avago Technologies General Ip Pte. Ltd.|Vertical cavity surface emitting laser array laser scanner|
JP5171158B2|2007-08-22|2013-03-27|浜松ホトニクス株式会社|Solid-state imaging device and range image measuring device|
JP5356726B2|2008-05-15|2013-12-04|浜松ホトニクス株式会社|Distance sensor and distance image sensor|
JP5585903B2|2008-07-30|2014-09-10|国立大学法人静岡大学|Distance image sensor and method for generating imaging signal by time-of-flight method|
US8995485B2|2009-02-17|2015-03-31|Trilumina Corp.|High brightness pulsed VCSEL sources|
JP4473337B1|2009-07-31|2010-06-02|株式会社オプトエレクトロニクス|Optical information reading apparatus and optical information reading method|
DE102009037596B4|2009-08-14|2014-07-24|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Pixel structure, system and method for optical distance measurement and control circuit for the pixel structure|
JP5211007B2|2009-10-07|2013-06-12|本田技研工業株式会社|Photoelectric conversion element, light receiving device, light receiving system, and distance measuring device|
JP2011169701A|2010-02-17|2011-09-01|Sanyo Electric Co Ltd|Object detection device and information acquisition apparatus|
JP2011191221A|2010-03-16|2011-09-29|Sanyo Electric Co Ltd|Object detection device and information acquisition device|
US8736818B2|2010-08-16|2014-05-27|Ball Aerospace & Technologies Corp.|Electronically steered flash LIDAR|
JP2012083220A|2010-10-12|2012-04-26|Hamamatsu Photonics Kk|Distance sensor and distance image sensor|
US9329035B2|2011-12-12|2016-05-03|Heptagon Micro Optics Pte. Ltd.|Method to compensate for errors in time-of-flight range cameras caused by multiple reflections|
US8686367B2|2012-03-01|2014-04-01|Omnivision Technologies, Inc.|Circuit configuration and method for time of flight sensor|
CN202977967U|2012-12-25|2013-06-05|山东省科学院海洋仪器仪表研究所|Array package semiconductor laser lighting panel|
WO2014122714A1|2013-02-07|2014-08-14|パナソニック株式会社|Image-capturing device and drive method therefor|
US8908063B2|2013-03-11|2014-12-09|Texas Instruments Incorporated|Method and apparatus for a time-of-flight sensor with charge storage|
US10497737B2|2013-05-30|2019-12-03|Caeleste Cvba|Enhanced dynamic range imaging|
BE1021971B1|2013-07-09|2016-01-29|Xenomatix Nv|ENVIRONMENTAL SENSOR SYSTEM|
US20150260830A1|2013-07-12|2015-09-17|Princeton Optronics Inc.|2-D Planar VCSEL Source for 3-D Imaging|
US9443310B2|2013-10-09|2016-09-13|Microsoft Technology Licensing, Llc|Illumination modules that emit structured light|
CN108919294A|2013-11-20|2018-11-30|松下知识产权经营株式会社|Ranging camera system and solid-state imager|
US9182490B2|2013-11-27|2015-11-10|Semiconductor Components Industries, Llc|Video and 3D time-of-flight image sensors|
EP3104191B1|2014-02-07|2019-06-26|National University Corporation Shizuoka University|Charge modulation element and solid-state imaging device|
US9874638B2|2014-03-06|2018-01-23|University Of Waikato|Time of flight camera system which resolves direct and multi-path radiation components|
GB201407267D0|2014-04-24|2014-06-11|Cathx Res Ltd|Underwater surveys|
US9753140B2|2014-05-05|2017-09-05|Raytheon Company|Methods and apparatus for imaging in scattering environments|
EP3074721B1|2014-08-08|2021-05-19|CEMB S.p.A.|Vehicle equipment with scanning system for contactless measurement|
US10677923B2|2014-11-12|2020-06-09|Ams Sensors Singapore Pte. Ltd.|Optoelectronic modules for distance measurements and/or multi-dimensional imaging|
JP6478725B2|2015-03-09|2019-03-06|キヤノン株式会社|Measuring device and robot|
US20160295122A1|2015-04-03|2016-10-06|Canon Kabushiki Kaisha|Display control apparatus, display control method, and image capturing apparatus|
US20160295133A1|2015-04-06|2016-10-06|Heptagon Micro Optics Pte. Ltd.|Cameras having a rgb-ir channel|
CN107615093B|2015-05-28|2021-07-06|新唐科技日本株式会社|Distance measurement imaging device, distance measurement method thereof, and solid-state imaging device|
WO2016208214A1|2015-06-24|2016-12-29|株式会社村田製作所|Distance sensor|
EP3159711A1|2015-10-23|2017-04-26|Xenomatix NV|System and method for determining a distance to an object|
Cited by:
Publication number | Filing date | Publication date | Applicant | Patent title
US9992477B2|2015-09-24|2018-06-05|Ouster, Inc.|Optical system for collecting distance information within a field|
EP3159711A1|2015-10-23|2017-04-26|Xenomatix NV|System and method for determining a distance to an object|
US20180341009A1|2016-06-23|2018-11-29|Apple Inc.|Multi-range time of flight sensing|
AU2017315762B2|2016-08-24|2020-04-09|Ouster, Inc.|Optical system for collecting distance information within a field|
EP3301477A1|2016-10-03|2018-04-04|Xenomatix NV|System for determining a distance to an object|
EP3301478A1|2016-10-03|2018-04-04|Xenomatix NV|System for determining a distance to an object|
WO2018142878A1|2017-02-06|2018-08-09|パナソニックIpマネジメント株式会社|Three-dimensional motion acquisition device and three-dimensional motion acquisition method|
US11105925B2|2017-03-01|2021-08-31|Ouster, Inc.|Accurate photo detector measurements for LIDAR|
EP3589990A4|2017-03-01|2021-01-20|Ouster, Inc.|Accurate photo detector measurements for lidar|
EP3596492A4|2017-03-13|2020-12-16|Opsys Tech Ltd|Eye-safe scanning lidar system|
US11151447B1|2017-03-13|2021-10-19|Zoox, Inc.|Network training process for hardware definition|
US10520590B2|2017-04-18|2019-12-31|Bae Systems Information And Electronic Systems Integration Inc.|System and method for ranging a target with a digital-pixel focal plane array|
EP3392674A1|2017-04-23|2018-10-24|Xenomatix NV|A pixel structure|
JP2020521954A|2017-05-15|2020-07-27|アウスター インコーポレイテッド|Optical imaging transmitter with enhanced brightness|
US10775501B2|2017-06-01|2020-09-15|Intel Corporation|Range reconstruction using shape prior|
US10830879B2|2017-06-29|2020-11-10|Apple Inc.|Time-of-flight depth mapping with parallax compensation|
US10754033B2|2017-06-30|2020-08-25|Waymo Llc|Light detection and rangingdevice range aliasing resilience by multiple hypotheses|
DE102017115385A1|2017-07-10|2019-01-10|Basler Ag|Device and method for detecting a three-dimensional depth image|
KR102326508B1|2017-07-28|2021-11-17|옵시스 테크 엘티디|Vcsel array lidar transmitter with small angular divergence|
US10627492B2|2017-08-01|2020-04-21|Waymo Llc|Use of extended detection periods for range aliasing detection and mitigation in a light detection and rangingsystem|
WO2019041268A1|2017-08-31|2019-03-07|SZ DJI Technology Co., Ltd.|A solid state light detection and rangingsystem|
US20190072771A1|2017-09-05|2019-03-07|Facebook Technologies, Llc|Depth measurement using multiple pulsed structured light projectors|
US10585176B2|2017-09-19|2020-03-10|Rockwell Automation Technologies, Inc.|Pulsed-based time of flight methods and system|
US10663565B2|2017-09-19|2020-05-26|Rockwell Automation Technologies, Inc.|Pulsed-based time of flight methods and system|
US10955552B2|2017-09-27|2021-03-23|Apple Inc.|Waveform design for a LiDAR system with closely-spaced pulses|
EP3470872B1|2017-10-11|2021-09-08|Melexis Technologies NV|Sensor device|
DE102017222017A1|2017-12-06|2019-06-06|Robert Bosch Gmbh|Method and system for determining and providing a soil profile|
EP3625589B1|2017-12-15|2020-11-18|Xenomatix NV|System and method for determining a distance to an object|
EP3550329A1|2018-04-04|2019-10-09|Xenomatix NV|System and method for determining a distance to an object|
US11002836B2|2018-05-14|2021-05-11|Rockwell Automation Technologies, Inc.|Permutation of measuring capacitors in a time-of-flight sensor|
US10996324B2|2018-05-14|2021-05-04|Rockwell Automation Technologies, Inc.|Time of flight system and method using multiple measuring sequences|
US10969476B2|2018-07-10|2021-04-06|Rockwell Automation Technologies, Inc.|High dynamic range for sensing systems and methods|
US20200116830A1|2018-08-09|2020-04-16|Ouster, Inc.|Channel-specific micro-optics for optical arrays|
US10739189B2|2018-08-09|2020-08-11|Ouster, Inc.|Multispectral ranging/imaging sensor arrays and systems|
CN108957470B|2018-08-22|2021-02-26|上海炬佑智能科技有限公司|Time-of-flight ranging sensor and ranging method thereof|
US10789506B2|2018-09-24|2020-09-29|Rockwell Automation Technologies, Inc.|Object intrusion detection system and method|
CN109636857B|2018-10-16|2021-10-15|歌尔光学科技有限公司|Alignment method and calibration system|
DE102018126841B4|2018-10-26|2021-05-06|Sick Ag|3D time-of-flight camera and method for capturing three-dimensional image data|
US10855896B1|2018-12-13|2020-12-01|Facebook Technologies, Llc|Depth determination using time-of-flight and camera assembly with augmented pixels|
US10791286B2|2018-12-13|2020-09-29|Facebook Technologies, Llc|Differentiated imaging using camera assembly with augmented pixels|
US10791282B2|2018-12-13|2020-09-29|Fenwick & West LLP|High dynamic range camera assembly with augmented pixels|
US10955234B2|2019-02-11|2021-03-23|Apple Inc.|Calibration of depth sensing using a sparse array of pulsed beams|
WO2020179839A1|2019-03-05|2020-09-10|浜松ホトニクス株式会社|Light-receiving device and method for manufacturing light-receiving device|
EP3789787A1|2019-09-03|2021-03-10|Xenomatix NV|Solid-state lidar system for determining distances to a scene|
WO2021043851A1|2019-09-03|2021-03-11|Xenomatix Nv|Projector for a solid-state lidar system|
EP3798673A1|2019-09-25|2021-03-31|Xenomatix NV|Method and device for determining distances to a scene|
CN110673152A|2019-10-29|2020-01-10|炬佑智能科技有限公司|Time-of-flight sensor and distance measuring method thereof|
CN110673153A|2019-10-29|2020-01-10|炬佑智能科技有限公司|Time-of-flight sensor and distance measuring method thereof|
US10902623B1|2019-11-19|2021-01-26|Facebook Technologies, Llc|Three-dimensional imaging with spatial and temporal coding for depth camera assembly|
DE102019131988A1|2019-11-26|2021-05-27|Sick Ag|3D time-of-flight camera and method for capturing three-dimensional image data|
US11194160B1|2020-01-21|2021-12-07|Facebook Technologies, Llc|High frame rate reconstruction with N-tap camera sensor|
JP2021120630A|2020-01-30|2021-08-19|ソニーセミコンダクタソリューションズ株式会社|Distance measuring device and distance measuring method|
CN111880189A|2020-08-12|2020-11-03|中国海洋大学|Continuous optical range gated lidar|
Legal status:
2017-10-30 | FG | Patent granted | Effective date: 2017-07-26
Priority:
Application number | Filing date | Patent title
EP15191288.8|2015-10-23|
EP15191288.8A|EP3159711A1|2015-10-23|2015-10-23|System and method for determining a distance to an object|